Group Logistic Regression Models with lp,q Regularization
Authors
Abstract
In this paper, we proposed a logistic regression model with lp,q regularization that could give a group sparse solution. The model can be applied to variable-selection problems with group structures. In the context of big data, solutions for practical problems are often sparse, so it is necessary to study this kind of model. We studied it from three perspectives: theoretical, algorithmic, and numerical. From the theoretical perspective, by introducing a notion of restricted eigenvalue condition, we gave an oracle inequality, which is an important property for this kind of problem. A global recovery bound was also established for the lp,q regularization model. From the algorithmic perspective, we applied the well-known alternating direction method of multipliers (ADMM) algorithm to solve the model, and the subproblems of ADMM were solved effectively. From the numerical perspective, we performed experiments on simulated data and on real data from factor stock selection, applying the method presented in this paper, and reported the results. We found the method effective in terms of both variable selection and prediction.
Similar Articles
Multiclass Bounded Logistic Regression – Efficient Regularization with Interior Point Method
Logistic regression has been widely used in classification tasks for many years. Its optimization in the case of linearly separable data has received extensive study due to the problem of a monotone likelihood. This paper presents a new approach, called bounded logistic regression (BLR), which solves logistic regression as a convex optimization problem with constraints. The paper tests the accuracy ...
Nonconvex Sparse Logistic Regression with Weakly Convex Regularization
In this work we propose to fit a sparse logistic regression model via a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the ℓ0 pseudo-norm, is able to induce sparsity better than the commonly used ℓ1 norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corres...
Dealing with Separation in Logistic Regression Models
When facing small numbers of observations or rare events, political scientists often encounter separation, in which explanatory variables perfectly predict binary events or non-events. In this situation, maximum likelihood provides implausible estimates and the researcher might want to incorporate some form of prior information into the model. The most sophisticated research uses Jeffreys' invaria...
Journal
Journal title: Mathematics
Year: 2022
ISSN: 2227-7390
DOI: https://doi.org/10.3390/math10132227